We introduce RLDS (Reinforcement Learning Datasets), an ecosystem for recording, replaying, manipulating, annotating, and sharing data in the context of Sequential Decision Making (SDM), including Reinforcement Learning (RL), Learning from Demonstrations, Offline RL, and Imitation Learning. RLDS not only enables the reproduction of existing research and the easy generation of new datasets, but also accelerates novel research. By providing a standard, lossless dataset format, it allows new algorithms to be tested quickly on a wider range of tasks. The RLDS ecosystem makes it easy to share datasets without any loss of information, and to remain agnostic to the underlying original format when applying various data-processing pipelines to large collections of datasets. In addition, RLDS provides tools for collecting data generated by synthetic agents or humans, as well as for inspecting and manipulating the collected data. Finally, integration with TFDS facilitates sharing RL datasets with the research community.
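As a concrete picture of the episodic data model described above, the sketch below wraps raw transitions into an episode of steps. The field names follow the RLDS convention (`observation`, `action`, `reward`, `is_first`, `is_last`, `is_terminal`); the recording helper itself is an illustration, not the RLDS API.

```python
# Minimal sketch of an RLDS-style episodic dataset: a dataset is a sequence of
# episodes, each episode a sequence of steps carrying standard fields.

def make_step(observation, action, reward,
              is_first=False, is_last=False, is_terminal=False):
    return {
        "observation": observation,
        "action": action,
        "reward": reward,
        "is_first": is_first,      # first step of the episode
        "is_last": is_last,        # last recorded step
        "is_terminal": is_terminal # environment terminated (vs. truncated)
    }

def record_episode(transitions):
    """Wrap raw (obs, action, reward) transitions into an episode of steps."""
    steps = []
    for i, (obs, act, rew) in enumerate(transitions):
        last = (i == len(transitions) - 1)
        steps.append(make_step(obs, act, rew,
                               is_first=(i == 0),
                               is_last=last,
                               is_terminal=last))
    return {"steps": steps}

episode = record_episode([(0, 1, 0.0), (1, 0, 1.0)])
```

Because every step carries the same schema, downstream pipelines can process episodes without knowing how the data was originally produced, which is the format-agnosticism the abstract refers to.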
Vehicle trajectory data has received increasing research attention over the past decades. With technological sensing improvements such as high-resolution video cameras, in-vehicle radars, and lidars, abundant individual and contextual traffic data is now available. However, though the data quantity is massive, it is by itself of limited utility for traffic research because of noise and systematic sensing errors, necessitating proper processing to ensure data quality. We draw particular attention to extracting high-resolution vehicle trajectory data from video cameras, as traffic monitoring cameras are becoming increasingly ubiquitous. We explore methods for automatic trajectory data reconciliation, given "raw" vehicle detection and tracking information from automatic video processing algorithms. We propose a pipeline including a) an online data association algorithm to match fragments that are associated with the same object (vehicle), which is formulated as a min-cost network flow problem on a graph, and b) a trajectory reconciliation method formulated as a quadratic program to enhance raw detection data. The pipeline leverages vehicle dynamics and physical constraints to associate tracked objects when they become fragmented, remove measurement noise on trajectories, and impute missing data due to fragmentations. The accuracy is benchmarked on a sample of manually labeled data, which shows that the reconciled trajectories improve the accuracy on all the tested input data for a wide range of measures. An online version of the reconciliation pipeline is implemented and will be applied in a continuous video processing system running on a camera network covering a 4-mile stretch of Interstate-24 near Nashville, Tennessee.
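To make the fragment-association step concrete, the sketch below matches fragments that end to fragments that start shortly after, at minimum total cost. The paper formulates this as a min-cost network flow problem; for a handful of fragments, a brute-force assignment over permutations stands in for the flow solver, and the cost function (constant-velocity extrapolation error) is an illustrative assumption, not the paper's cost model.

```python
import itertools

def transition_cost(frag_a, frag_b, max_gap=5.0):
    """Cost of declaring frag_b the continuation of frag_a (constant velocity)."""
    dt = frag_b["t_start"] - frag_a["t_end"]
    if dt <= 0 or dt > max_gap:
        return float("inf")  # fragments overlap in time, or the gap is too long
    predicted = frag_a["x_end"] + frag_a["v_end"] * dt
    return abs(frag_b["x_start"] - predicted)  # extrapolation error

def associate(ended, started):
    """Min-cost one-to-one matching of ended fragments to started fragments."""
    best, best_cost = None, float("inf")
    for perm in itertools.permutations(range(len(started)), len(ended)):
        cost = sum(transition_cost(ended[i], started[j])
                   for i, j in enumerate(perm))
        if cost < best_cost:
            best, best_cost = list(enumerate(perm)), cost
    return best, best_cost

ended = [{"t_end": 0.0, "x_end": 10.0, "v_end": 2.0},
         {"t_end": 0.0, "x_end": 50.0, "v_end": 1.0}]
started = [{"t_start": 2.0, "x_start": 52.2},
           {"t_start": 2.0, "x_start": 14.1}]
matches, cost = associate(ended, started)
```

Here the slower vehicle near position 50 is matched to the fragment starting near 52, and the faster one near 10 to the fragment near 14; a network-flow formulation scales this same idea to many fragments with optional "no match" arcs.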
To apply federated learning to drug discovery, we developed a novel platform in the context of the European Innovative Medicines Initiative (IMI) project MELLODDY (grant n°831472), which comprised 10 pharmaceutical companies, academic research labs, large industrial companies, and startups. The MELLODDY platform was the first industry-scale platform to enable the creation of a global federated model for drug discovery without sharing the confidential data sets of the individual partners. The federated model was trained on the platform by aggregating the gradients of all contributing partners in a cryptographic, secure way following each training iteration. The platform was deployed on an Amazon Web Services (AWS) multi-account architecture running Kubernetes clusters in private subnets. Organisationally, the roles of the different partners were codified as different rights and permissions on the platform and administered in a decentralized way. The MELLODDY platform generated new scientific discoveries, which are described in a companion paper.
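The property that matters above is that the server only ever sees the aggregated gradient, never an individual partner's contribution. A classic way to obtain that property is pairwise additive masking, the core idea behind secure aggregation protocols; the sketch below illustrates the principle only and is not the MELLODDY implementation (which the abstract does not detail).

```python
import random

def masked_updates(gradients, seed=0):
    """Each pair of clients shares a random mask that cancels in the sum."""
    n = len(gradients)
    rng = random.Random(seed)  # stands in for pairwise-agreed shared secrets
    masks = {(i, j): rng.uniform(-1e6, 1e6)
             for i in range(n) for j in range(i + 1, n)}
    out = []
    for i, g in enumerate(gradients):
        mask = (sum(masks[(i, j)] for j in range(i + 1, n))
                - sum(masks[(j, i)] for j in range(i)))
        out.append(g + mask)  # the individual gradient is hidden by the mask
    return out

grads = [0.1, 0.4, -0.2]
masked = masked_updates(grads)
total = sum(masked)  # masks cancel pairwise, so this equals sum(grads)
```

Each client i adds mask m_ij for every partner j with j > i and subtracts it for j < i, so every mask appears once with each sign and vanishes from the aggregate while each individual update stays unintelligible.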
Federated learning (FL) has been proposed as a privacy-preserving approach in distributed machine learning. A federated learning architecture consists of a central server and a number of clients that have access to private, potentially sensitive data. Clients are able to keep their data in their local machines and only share their locally trained model's parameters with a central server that manages the collaborative learning process. FL has delivered promising results in real-life scenarios, such as healthcare, energy, and finance. However, when the number of participating clients is large, the overhead of managing the clients slows down the learning. Thus, client selection has been introduced as a strategy to limit the number of communicating parties at every step of the process. Since the early naïve random selection of clients, several client selection methods have been proposed in the literature. Unfortunately, given that this is an emergent field, there is a lack of a taxonomy of client selection methods, making it hard to compare approaches. In this paper, we propose a taxonomy of client selection in Federated Learning that enables us to shed light on current progress in the field and identify potential areas of future research in this promising area of machine learning.
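To fix ideas, the sketch below shows where client selection sits in a FedAvg-style training loop: each round the server samples a subset of clients, collects their locally updated parameters, and averages them. The local update is a stub, and the selection rule is the naive uniform-random baseline that the surveyed methods improve upon.

```python
import random

def local_update(global_params, client_data):
    # Stub for local training: nudge each parameter toward the client's data mean.
    mean = sum(client_data) / len(client_data)
    return [p + 0.1 * (mean - p) for p in global_params]

def fedavg_round(global_params, clients, k, rng):
    """One round: select k clients, train locally, average the results."""
    selected = rng.sample(sorted(clients), k)  # <-- the client selection step
    updates = [local_update(global_params, clients[c]) for c in selected]
    averaged = [sum(ps) / len(ps) for ps in zip(*updates)]
    return averaged, selected

clients = {"c1": [1.0, 2.0], "c2": [3.0], "c3": [5.0, 7.0]}
params = [0.0]
rng = random.Random(0)
for _ in range(3):
    params, chosen = fedavg_round(params, clients, k=2, rng=rng)
```

The taxonomy the paper proposes classifies what replaces the `rng.sample` line, e.g. selection biased by client loss, hardware capability, or data distribution.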
This work develops new algorithms with rigorous efficiency guarantees for infinite-horizon imitation learning (IL) with linear function approximation, without restrictive coherence assumptions. We begin with the minimax formulation of the problem and then outline how to leverage classical tools from optimization, in particular the proximal point method (PPM) and dual smoothing, for online and offline IL, respectively. Thanks to PPM, we avoid the nested policy evaluation and cost updates that appear in the prior literature on online IL. In particular, we do away with the conventional alternating updates by optimizing a single convex and smooth objective over both the cost and the Q-functions. When this objective is solved inexactly, we relate the optimization error to the suboptimality of the recovered policy. As an added bonus, by reinterpreting PPM as dual smoothing centered at the expert policy, we also obtain an offline IL algorithm that enjoys theoretical guarantees in terms of the number of required expert trajectories. Finally, we achieve convincing empirical performance for both linear and neural-network function approximation.
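For context on the role PPM plays here, the classical proximal point update has the generic form below (generic notation for a convex objective $f$ and step size $\eta > 0$; the paper applies the scheme to its min-max IL objective, not to a plain function):

```latex
x_{k+1} \;=\; \operatorname*{arg\,min}_{x} \left\{ f(x) \;+\; \frac{1}{2\eta}\,\lVert x - x_k \rVert^2 \right\}
```

Each iterate minimizes the objective plus a proximity term to the previous iterate, which is what lets the abstract replace nested evaluate-then-update loops with repeated solves of one regularized convex problem.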
Many applications, such as hiring and university admissions, involve the evaluation and selection of applicants. These tasks are fundamentally difficult and require combining evidence from multiple distinct aspects (which we term "attributes"). In these applications, the number of applicants is often large, and a common practice is to assign the task to multiple evaluators in a distributed manner. Specifically, in the frequently used holistic allocation, each evaluator is assigned a subset of the applicants and is asked to assess all relevant information for their assigned applicants. However, such an evaluation process is subject to issues such as miscalibration (evaluators see only a small fraction of the applicants and may not develop a good sense of relative quality) and discrimination (evaluators are influenced by irrelevant information about the applicants). We identify that attribute-based evaluation allows alternative allocation schemes. Specifically, we consider assigning each evaluator more applicants but fewer attributes per applicant, which we call segmented allocation. We compare segmented allocation to holistic allocation along several dimensions via theoretical and experimental methods. We establish various tradeoffs between the two approaches and identify conditions under which one approach yields more accurate evaluations than the other.
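The two allocation schemes contrasted above can be sketched as assignment constructors: holistic gives each evaluator every attribute of a subset of applicants, while segmented gives each evaluator one attribute across all applicants. The round-robin construction below is an illustration with equal per-evaluator workload, not the paper's procedure.

```python
def holistic_assignment(applicants, attributes, evaluators):
    """Evaluator i judges every attribute of applicants[i::n_evaluators]."""
    n = len(evaluators)
    return {e: [(a, attr) for a in applicants[i::n] for attr in attributes]
            for i, e in enumerate(evaluators)}

def segmented_assignment(applicants, attributes, evaluators):
    """Evaluator i judges a single attribute, but across all applicants."""
    return {e: [(a, attributes[i % len(attributes)]) for a in applicants]
            for i, e in enumerate(evaluators)}

apps = ["A", "B", "C", "D"]
attrs = ["grades", "essay"]
evals = ["e1", "e2"]
hol = holistic_assignment(apps, attrs, evals)
seg = segmented_assignment(apps, attrs, evals)
```

Both schemes produce four (applicant, attribute) judgments per evaluator; the difference is the shape of each evaluator's view, which is exactly what drives the miscalibration-versus-context tradeoffs the paper studies.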
The prospect of combining human operators and virtual agents (bots) into an effective hybrid system that delivers proper customer service is promising yet challenging. A hybrid system reduces customers' frustration when bots are unable to provide appropriate service, and increases their satisfaction when they prefer to interact with human operators. Furthermore, we show that the cost and effort of building and maintaining such virtual agents can be reduced by enabling the virtual agent to incrementally learn from the human operators. We employ queueing theory to identify the key parameters that govern the behavior and efficiency of such hybrid systems, and pinpoint the main parameters that should be optimized in order to improve the service. We formally prove, and demonstrate in extensive simulations and in a user study, that with the proper choice of parameters, such hybrid systems are able to increase the number of served customers while decreasing their expected waiting time and increasing satisfaction.
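As a concrete example of the kind of queueing-theoretic quantity involved, the Erlang C formula gives the expected waiting time in an M/M/c queue, so one can compare, say, adding an operator against improving service rates. This is a textbook illustration of the modeling approach; the arrival and service rates below are made up, and the abstract does not specify that this exact model is the one used.

```python
import math

def erlang_c_wait(arrival_rate, service_rate, servers):
    """Expected waiting time in queue for an M/M/c system (Erlang C formula)."""
    a = arrival_rate / service_rate  # offered load
    rho = a / servers                # server utilization
    assert rho < 1, "system must be stable (utilization below 1)"
    top = a ** servers / math.factorial(servers)
    below = (1 - rho) * sum(a ** k / math.factorial(k)
                            for k in range(servers)) + top
    p_wait = top / below             # probability an arrival must wait
    return p_wait / (servers * service_rate - arrival_rate)

# 10 customers/hour, each operator serves 4/hour: compare 3 vs. 4 operators.
w3 = erlang_c_wait(10, 4, 3)  # ~0.35 hours expected wait
w4 = erlang_c_wait(10, 4, 4)  # ~0.05 hours expected wait
```

Computations like this are how one pinpoints which parameter (number of operators, bot service rate, routing fraction) buys the largest reduction in expected waiting time.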
How do words change their meaning? Although semantic evolution is driven by a variety of distinct factors, including linguistic, societal, and technological ones, we find that there is one law that holds universally across five major Indo-European languages: semantic evolution is strongly subdiffusive. Using an automated pipeline of distributional semantic embeddings that controls for the underlying symmetries, we show that words follow stochastic trajectories in meaning space with an anomalous diffusion exponent $\alpha = 0.45 \pm 0.05$, in contrast to diffusing particles, for which $\alpha = 1$. Randomization methods indicate that preserving temporal correlations in the direction of semantic change is necessary to recover the strongly subdiffusive behavior; however, correlations in the magnitude of change also play a role. We furthermore show that strong subdiffusion is a robust phenomenon under a wide variety of choices in data analysis and interpretation, such as whether the best-fit exponent is obtained from averaged displacements or by averaging over individual word trajectories.
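The measurement behind the exponent claim can be sketched directly: estimate $\alpha$ by fitting $\log \mathrm{MSD}(t) \sim \alpha \log t$, where MSD is the mean squared displacement over an ensemble of trajectories. Below the "trajectories" are ordinary random walks, so the fitted exponent should come out near 1; in the paper, the trajectories are word embeddings through time and the fit yields roughly 0.45.

```python
import math
import random

def msd(trajectories, t):
    """Mean squared displacement from the starting point at lag t."""
    return sum((traj[t] - traj[0]) ** 2 for traj in trajectories) / len(trajectories)

def fit_alpha(trajectories):
    """Least-squares slope of log MSD(t) versus log t."""
    ts = range(1, len(trajectories[0]))
    xs = [math.log(t) for t in ts]
    ys = [math.log(msd(trajectories, t)) for t in ts]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    return (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
            / sum((x - mx) ** 2 for x in xs))

rng = random.Random(0)
walks = []
for _ in range(500):  # ensemble of 1-D random walks, 64 steps each
    x, traj = 0.0, [0.0]
    for _ in range(64):
        x += rng.choice([-1.0, 1.0])
        traj.append(x)
    walks.append(traj)
alpha = fit_alpha(walks)  # close to 1 for ordinary diffusion
```

A value of $\alpha$ well below 1, as reported for word meanings, indicates subdiffusion: trajectories that explore the space far more slowly than an uncorrelated random walk would.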
The convergence of the 5G architecture and deep learning has attracted considerable research interest in both the wireless communications and artificial intelligence communities. This is because deep learning techniques have been identified as potential drivers of the 5G technologies that make up the 5G architecture. Consequently, extensive surveys on the convergence of the 5G architecture and deep learning have been conducted. However, most existing survey papers focus mainly on how deep learning converges with one specific 5G technology, and therefore do not cover the full scope of the 5G architecture. Although one recent survey paper appears comprehensive, a review of that paper shows that it is not structured to specifically cover the convergence of deep learning with the 5G technologies. This paper therefore provides an overview of the convergence of the key 5G technologies and deep learning, and discusses the challenges facing this convergence. In addition, a brief overview of the future 6G architecture, and how it may converge with deep learning, is also presented.
We study the problem of online learning in adversarial bandit problems under a partial observability model called off-policy feedback. In this sequential decision-making problem, the learner cannot directly observe its rewards, but instead sees those obtained by another unknown policy run in parallel (the behavior policy). The learner must face an additional challenge in this setting: due to the limited observations outside of its control, it may not be able to estimate the value of each policy equally well. To address this issue, we propose a set of algorithms that guarantee regret bounds scaling with a natural notion of mismatch between any comparator policy and the behavior policy, achieving improved performance against comparators that are well covered by the observations. We also provide an extension to the setting of adversarial linear contextual bandits, and validate the theoretical guarantees with a set of experiments. Our key algorithmic idea is to adapt the notion of pessimistic reward estimators that has recently become popular in the context of off-policy reinforcement learning.
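The pessimistic-estimation idea named at the end can be sketched in a few lines: estimate a policy's value from rewards logged under the behavior policy via importance weighting, then subtract an uncertainty width so that policies poorly covered by the behavior policy are scored conservatively. The width formula below (the largest importance ratio over $\sqrt{n}$) is an illustrative choice, not the paper's bound.

```python
import math

def pessimistic_value(logged, target_probs, behavior_probs, beta=1.0):
    """Lower-confidence value estimate from (action, reward) pairs logged
    under the behavior policy, via importance weighting minus a width."""
    n = len(logged)
    weights = [target_probs[a] / behavior_probs[a] for a, _ in logged]
    estimate = sum(w * r for w, (_, r) in zip(weights, logged)) / n
    # Pessimism: the penalty grows with the worst-case importance ratio,
    # i.e., with how badly the behavior policy covers the target policy.
    worst_ratio = max(target_probs[a] / behavior_probs[a] for a in target_probs)
    return estimate - beta * worst_ratio / math.sqrt(n)

behavior = {"a": 0.5, "b": 0.5}
near_policy = {"a": 0.6, "b": 0.4}    # well covered: small penalty
far_policy = {"a": 0.99, "b": 0.01}   # poorly covered: larger penalty
logged = [("a", 1.0), ("b", 0.0)] * 50  # 100 logged rounds; action "a" pays 1
v_near = pessimistic_value(logged, near_policy, behavior)
v_far = pessimistic_value(logged, far_policy, behavior)
```

Ranking policies by such lower-confidence values is what yields guarantees that degrade gracefully with the mismatch between a comparator and the behavior policy.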